CS 229r: Algorithms for Big Data 2 Sparse JL from Last Time

Authors

  • Jelani Nelson
  • Rachit Singh
Abstract

In the last lecture we discussed how distributional JL implies Gordon's theorem, and began our discussion of sparse JL. We wrote $\|\Pi x\|_2^2 = \sigma^T A_x^T A_x \sigma$ and bounded the expression using Hanson-Wright in terms of the Frobenius norm. In this lecture we'll bound that Frobenius norm and then discuss applications to fast nearest neighbors. Note that we defined $B_x = A_x^T A_x$ as the center of the product from before, but with the diagonals zeroed out. $B_x$ is a block-diagonal matrix with $m$ blocks.
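To make the object concrete, here is a minimal sketch (my own illustration, not code from the notes) of the kind of sparse $\Pi$ analyzed here: each column carries $s$ nonzero entries, uniformly placed, with independent random signs scaled by $1/\sqrt{s}$, so that $\|\Pi x\|_2^2$ concentrates around $\|x\|_2^2$. All parameter values below are arbitrary demo choices.

    import numpy as np

    rng = np.random.default_rng(0)

    def sparse_jl(m, n, s, rng):
        # m x n matrix with exactly s nonzeros per column, each +-1/sqrt(s)
        Pi = np.zeros((m, n))
        for j in range(n):
            rows = rng.choice(m, size=s, replace=False)  # where column j is nonzero
            Pi[rows, j] = rng.choice([-1.0, 1.0], size=s) / np.sqrt(s)
        return Pi

    n, m, s = 1000, 200, 8              # demo parameters, chosen arbitrarily
    x = rng.standard_normal(n)
    x /= np.linalg.norm(x)              # unit norm, so ||Pi x||_2^2 should be near 1
    Pi = sparse_jl(m, n, s, rng)
    print(np.linalg.norm(Pi @ x) ** 2)  # close to 1: the norm is approximately preserved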


Similar Resources

CS 229r: Algorithms for Big Data 2 Dimensionality Reduction 2.2 Limitations of Dimensionality Reduction

In the last lecture we proved several space lower bounds for streaming algorithms using the communication complexity model and some ideas from information theory. In this lecture we move on to the next topic: dimensionality reduction. Dimensionality reduction is useful when solving high-dimensional computational geometry problems, such as:

  • clustering
  • nearest neighbors search (see the sketch below)
  • numeric...
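As a quick illustration of the nearest-neighbors application, a minimal sketch (my own demo with arbitrary sizes, using a dense Gaussian JL map rather than anything from the lecture): project the points and the query down, then search in the low-dimensional space.

    import numpy as np

    rng = np.random.default_rng(1)
    n, d, m = 500, 1000, 50                  # points, original dim, reduced dim (arbitrary)
    X = rng.standard_normal((n, d))
    q = rng.standard_normal(d)
    X[0] = q + 0.1 * rng.standard_normal(d)  # plant a genuinely close point

    Pi = rng.standard_normal((m, d)) / np.sqrt(m)  # dense Gaussian JL map

    def nearest(points, query):
        return int(np.argmin(np.linalg.norm(points - query, axis=1)))

    # searching after projection finds the same neighbor, since distances are preserved
    print(nearest(X, q), nearest(X @ Pi.T, Pi @ q))  # both print 0 (with high probability)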


CS 229r: Algorithms for Big Data Lecture 03, September 11th 2 Main Section 2.1 Space Lower Bounds for Streaming Algorithms

Proof. We are going to make an information-theoretic argument. Using a streaming algorithm with space $s$ for the current problem, we are going to show how to encode $\{0,1\}^n$ using only $s$ bits. In other words, we are going to construct an injective mapping from $\{0,1\}^n$ to $\{0,1\}^s$. So this implies that $s$ must be at least $n$, and we are done. We look for procedures Dec, Enc such that $\forall x,\ \mathrm{Dec}(\mathrm{Enc}(x)) = x$ ...
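The counting step can be made explicit in one line (a restatement of the excerpt's argument, not new material): an injective map into $s$-bit strings forces at least as many codewords as inputs.

\[
\mathrm{Enc}\colon \{0,1\}^n \hookrightarrow \{0,1\}^s
\quad\Longrightarrow\quad
2^n = |\{0,1\}^n| \le |\{0,1\}^s| = 2^s
\quad\Longrightarrow\quad
s \ge n.
\]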


CS 229r: Algorithms for Big Data Lecture October 13th 3 Main Section 3.1 Lower Bounds on Dimensionality Reduction 3.1.1 Lower Bounds on Distributional Johnson-Lindenstrauss Lemma

The first lower bound was proved by Jayram and Woodruff [1], and then by Kane, Meka, and Nelson [2]. The lower bound says that any $(\varepsilon, \delta)$-DJL for $\varepsilon, \delta \in (0, 1/2)$ must have $m = \Omega(\min\{n, \varepsilon^{-2} \log(1/\delta)\})$. The second proof builds on the following idea: since for every fixed $x$ we have the probabilistic guarantee $\Pr_{\Pi \sim \mathcal{D}_{\varepsilon,\delta}}[\,|\|\Pi x\|_2^2 - 1| > \max\{\varepsilon, \varepsilon^2\}\,] < \delta$, it is also true for any distribution over $x$. We are going to pi...
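The averaging idea the excerpt starts to describe can be written out as follows (a sketch of the standard step, under the guarantee just stated): a per-vector failure bound transfers to any distribution $\mu$ over $x$ by taking expectations.

\[
\forall x:\ \Pr_{\Pi \sim \mathcal{D}_{\varepsilon,\delta}}\bigl[\,\bigl|\|\Pi x\|_2^2 - 1\bigr| > \max\{\varepsilon,\varepsilon^2\}\,\bigr] < \delta
\;\Longrightarrow\;
\Pr_{x \sim \mu,\ \Pi \sim \mathcal{D}_{\varepsilon,\delta}}\bigl[\,\bigl|\|\Pi x\|_2^2 - 1\bigr| > \max\{\varepsilon,\varepsilon^2\}\,\bigr]
= \mathbb{E}_{x \sim \mu}\,\Pr_{\Pi}[\cdots] < \delta.
\]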


CS 229r Notes

A lot of the course material is new, from within the last 10 years, and since the area is so ripe for contributions, the project will be cutting-edge. There are three possible routes: make a theoretical contribution to some topic in the course; write a survey covering many related papers in an area (one option is to be risky and try to make a contribution and fall back on a survey if you're not succ...


Comparative Analysis of Sparse Signal Reconstruction Algorithms for Compressed Sensing

Compressed sensing (CS) is a rapidly growing field, attracting considerable attention in many areas, from imaging to communication and control systems. This signal processing framework is based on the reconstruction of signals that are sparse in some domain from a very small collection of linear projections of the signal. The solution to the underdetermined linear system, resulting from ...
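The reconstruction step the abstract refers to can be illustrated with orthogonal matching pursuit, one of the greedy algorithms typically compared in such studies (this is a generic sketch with arbitrary demo sizes, not code from the paper):

    import numpy as np

    def omp(A, y, k):
        # Greedy orthogonal matching pursuit: recover a k-sparse x from y = A x.
        m, n = A.shape
        residual = y.copy()
        support = []
        for _ in range(k):
            # pick the column most correlated with the current residual
            j = int(np.argmax(np.abs(A.T @ residual)))
            if j not in support:
                support.append(j)
            # least-squares fit on the chosen support, then update the residual
            coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            residual = y - A[:, support] @ coef
        x_hat = np.zeros(n)
        x_hat[support] = coef
        return x_hat

    rng = np.random.default_rng(2)
    n, m, k = 256, 80, 5                           # arbitrary demo sizes
    A = rng.standard_normal((m, n)) / np.sqrt(m)   # random Gaussian measurement matrix
    x = np.zeros(n)
    x[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)  # k-sparse signal
    y = A @ x                                      # compressed measurements

    x_hat = omp(A, y, k)
    print(np.linalg.norm(x - x_hat))               # typically near 0 at these sizes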




Publication date: 2015